Large study squashes anti-vaccine talking points about aluminum

A sweeping analysis of health data from more than 1.2 million children in Denmark born over a 24-year period found no link between the small amounts of aluminum in vaccines and a wide range of health conditions—including asthma, allergies, eczema, autism, and attention-deficit/hyperactivity disorder (ADHD).

The finding, published in the Annals of Internal Medicine, firmly squashes a persistent anti-vaccine talking point that can give vaccine-hesitant parents pause.

Small amounts of aluminum salts have been added to vaccines for decades as adjuvants, that is, components of the vaccine that help drum up protective immune responses against a target germ. Aluminum adjuvants can be found in a variety of vaccines, including those against diphtheria, tetanus, and pertussis, Haemophilus influenzae type b (Hib), and hepatitis A and B.

Despite decades of use worldwide and no clear link to harms, concern about aluminum and cumulative exposures continually resurfaces—largely thanks to anti-vaccine advocates who fearmonger about the element. A leader of such voices is Robert F. Kennedy Jr, the current US health secretary and an ardent anti-vaccine advocate.

In a June 2024 interview with podcaster Joe Rogan, Kennedy falsely claimed that aluminum is "extremely neurotoxic" and "give[s] you allergies." The podcast has racked up nearly 2 million views on YouTube. Likewise, Children's Health Defense, the rabid anti-vaccine organization Kennedy created in 2018, has made wild claims about the safety of aluminum adjuvants, including linking them to autism, even though many high-quality scientific studies have found no link between any vaccines and autism.

While anti-vaccine advocates like Kennedy routinely dismiss and attack the plethora of studies that do not support their dangerous claims, the new study should reassure any hesitant parents.

Clear data, unclear future

For the study, lead author Niklas Worm Andersson, of the Statens Serum Institut in Copenhagen, and colleagues tapped into Denmark's national registry to analyze medical records of over 1.2 million children born in the country between 1997 and 2018. During that time, new vaccines were introduced and recommendations shifted, creating variation in how many aluminum-containing vaccines children received.

The researchers calculated cumulative vaccine-based aluminum exposure for each child, which ranged from 0 mg to 4.5 mg by age 2. They then looked for associations between those exposures and 50 chronic conditions spanning autoimmune, allergic, atopic, and neurodevelopmental disorders.
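
As a rough illustration of the exposure measure described above, here is a minimal sketch of that per-child aggregation in Python. The vaccine names and per-dose aluminum amounts are hypothetical placeholders, not the actual adjuvant contents or schedule analyzed in the Danish study.

```python
# Hypothetical per-dose aluminum content (mg) by vaccine; placeholder values only.
ALUMINUM_MG_PER_DOSE = {
    "DTaP-IPV/Hib": 0.5,
    "HepB": 0.25,
    "Pneumococcal": 0.125,
}

def cumulative_aluminum_mg(doses_before_age_2):
    """Sum aluminum exposure (mg) across a child's recorded vaccine doses."""
    return sum(ALUMINUM_MG_PER_DOSE.get(vaccine, 0.0) for vaccine in doses_before_age_2)

# Example record: three DTaP-IPV/Hib doses plus one HepB dose by age 2.
child_record = ["DTaP-IPV/Hib", "DTaP-IPV/Hib", "DTaP-IPV/Hib", "HepB"]
print(cumulative_aluminum_mg(child_record))  # 1.75 mg, within the 0-4.5 mg range reported
```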

The results were clear across the board: There was no statistically significant increased risk for any of the 50 conditions examined. The statistical analysis couldn't entirely rule out the possibility of very small relative increases in risk (1 percent to 2 percent) for some of the rarer conditions analyzed, but overall, the data ruled out meaningful increases across the range of conditions. Aluminum adjuvants are not a health concern.

Still, it's uncertain if the fresh data will keep aluminum-containing vaccines out of Kennedy's crosshairs. Last month, Bloomberg reported that Kennedy was considering asking his hand-picked vaccine advisory committee to review aluminum in vaccines. The committee—the Advisory Committee on Immunization Practices (ACIP)—shapes the Centers for Disease Control and Prevention's immunization schedule, which sets vaccination recommendations nationwide and determines which vaccines are covered by health insurance plans.

Kennedy's reconstituted ACIP has little expertise in vaccines and has embraced anti-vaccine views. For instance, in its first meeting at the end of June, Kennedy's ACIP voted to drop long-standing CDC recommendations for flu vaccines that contain the mercury-based preservative thimerosal, based on an anti-vaccine presentation from the former president of Kennedy's anti-vaccine group, Children's Health Defense. Thimerosal, like aluminum adjuvants, has been safely used for decades around the world but has long been a target of anti-vaccine advocates. If the new ACIP similarly reviews and votes against aluminum adjuvants, it would jeopardize the availability of at least two dozen vaccines, Bloomberg reported, citing sources familiar with the matter.

It’s unclear if Kennedy will pursue an ACIP review of aluminum-containing vaccines. The Department of Health and Human Services declined to comment on the matter to Bloomberg, and ACIP members did not specifically mention future plans to review aluminum adjuvants when they met in June.

DOGE Denizen Marko Elez Leaked API Key for xAI

Marko Elez, a 25-year-old employee at Elon Musk’s Department of Government Efficiency (DOGE), has been granted access to sensitive databases at the U.S. Social Security Administration, the Treasury and Justice departments, and the Department of Homeland Security. So it should fill all Americans with a deep sense of confidence to learn that Mr. Elez over the weekend inadvertently published a private key that allowed anyone to interact directly with more than four dozen large language models (LLMs) developed by Musk’s artificial intelligence company xAI.

Image: Shutterstock, @sdx15.

On July 13, Mr. Elez committed a code script to GitHub called “agent.py” that included a private application programming interface (API) key for xAI. The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
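
As background on how this kind of exposure happens and gets caught, here is a minimal sketch of the hardcoded-secret anti-pattern versus reading the key from the environment, plus a toy scanner of the sort GitGuardian runs at scale. The variable names, the "xai-" prefix, and the regex are illustrative assumptions, not details taken from Elez's actual file or GitGuardian's detectors.

```python
import os
import re

# Anti-pattern (the kind of mistake described above): a private key hardcoded
# in a file that gets committed. Anyone who can read the repository, or its
# history, can use the key until it is revoked.
# XAI_API_KEY = "xai-XXXXXXXXXXXXXXXX"   # hypothetical placeholder; never commit this

# Safer pattern: keep the key out of the repository and read it at runtime.
XAI_API_KEY = os.environ.get("XAI_API_KEY", "")

# Secret scanners work roughly like this: match committed text against known
# credential formats and alert the owner. The pattern below assumes an
# "xai-"-prefixed token format purely for illustration.
SUSPECT_KEY = re.compile(r"\bxai-[A-Za-z0-9_-]{20,}\b")

def scan_for_keys(text: str) -> list[str]:
    """Return substrings of `text` that look like xAI-style API keys."""
    return SUSPECT_KEY.findall(text)

if __name__ == "__main__":
    sample = 'api_key = "xai-' + "A" * 32 + '"'
    print(scan_for_keys(sample))  # a scanner would flag this committed line
```

Note that deleting the file afterward is not enough: the secret remains in the repository's history and stays usable until it is revoked, which is why the still-working key matters.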

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI. The most recent LLM in the list was called “grok-4-0709” and was created on July 9, 2025.

Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in February 2025). Earlier today, xAI announced that the Department of Defense will begin using Grok as part of a contract worth up to $200 million. The contract award came less than a week after Grok began spewing antisemitic rants and invoking Adolf Hitler.

Mr. Elez did not respond to a request for comment. The code repository containing the private xAI key was removed shortly after Caturegli notified Elez via email. However, Caturegli said the exposed API key still works and has not yet been revoked.

“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.

Prior to joining DOGE, Marko Elez worked for a number of Musk’s companies. His DOGE career began at the Department of the Treasury, and a legal battle over DOGE’s access to Treasury databases showed Elez was sending unencrypted personal information in violation of the agency’s policies.

While still at Treasury, Elez resigned after The Wall Street Journal linked him to social media posts that advocated racism and eugenics. When Vice President J.D. Vance lobbied for Elez to be rehired, President Trump agreed and Musk reinstated him.

Since his re-hiring as a DOGE employee, Elez has been granted access to databases at one federal agency after another. TechCrunch reported in February 2025 that he was working at the Social Security Administration. In March, Business Insider found Elez was part of a DOGE detachment assigned to the Department of Labor.

Marko Elez, in a photo from a social media profile.

In April, The New York Times reported that Elez held positions at the U.S. Customs and Border Protection and the Immigration and Customs Enforcement (ICE) bureaus, as well as the Department of Homeland Security. The Washington Post later reported that Elez, while serving as a DOGE advisor at the Department of Justice, had gained access to the Executive Office for Immigration Review’s Courts and Appeals System (EACS).

Elez is not the first DOGE worker to publish internal API keys for xAI: In May, KrebsOnSecurity detailed how another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.

Caturegli said it’s difficult to trust someone with access to confidential government systems when they can’t even manage the basics of operational security.

“One leak is a mistake,” he said. “But when the same type of sensitive key gets exposed again and again, it’s not just bad luck, it’s a sign of deeper negligence and a broken security culture.”

Study finds AI tools made open source software developers 19 percent slower

When it comes to concrete use cases for large language models, AI companies love to point out the ways coders and software developers can use these models to increase their productivity and overall efficiency in creating computer code. However, a new randomized controlled trial has found that experienced open source coders became less efficient at coding-related tasks when they used current AI tools.

For their study, researchers at METR (Model Evaluation and Threat Research) recruited 16 software developers, each with multiple years of experience working on specific open source repositories. The study followed these developers across 246 individual "tasks" involved with maintaining those repos, such as "bug fixes, features, and refactors that would normally be part of their regular work." For half of those tasks, the developers used AI tools like Cursor Pro or Anthropic's Claude; for the others, the programmers were instructed not to use AI assistance. Expected time forecasts for each task (made before the groupings were assigned) were used as a proxy to balance out the overall difficulty of the tasks in each experimental group, and the time needed to fix pull requests based on reviewer feedback was included in the overall assessment.

Experts and the developers themselves expected time savings that didn't materialize when AI tools were actually used. Credit: METR

Before performing the study, the developers in question expected the AI tools would lead to a 24 percent reduction in the time needed for their assigned tasks. Even after completing those tasks, the developers believed that the AI tools had made them 20 percent faster, on average. In reality, though, the AI-aided tasks ended up being completed 19 percent slower than those completed without AI tools.
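
To make those percentages concrete, here is the arithmetic for a hypothetical task with a 100-minute no-AI baseline; the study reports relative changes, not these absolute times.

```python
# Illustrative arithmetic only; the 100-minute baseline is hypothetical.
baseline_minutes = 100.0          # time to finish a task without AI assistance

expected_reduction = 0.24         # developers' pre-study forecast: 24% faster with AI
perceived_reduction = 0.20        # developers' post-task belief: 20% faster
observed_slowdown = 0.19          # measured outcome: 19% slower

expected_time = baseline_minutes * (1 - expected_reduction)    # 76 minutes
perceived_time = baseline_minutes * (1 - perceived_reduction)  # 80 minutes
observed_time = baseline_minutes * (1 + observed_slowdown)     # 119 minutes

print(f"expected {expected_time:.0f} min, felt like {perceived_time:.0f} min, "
      f"actually took {observed_time:.0f} min")
```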

Trade-offs

By analyzing screen recording data from a subset of the studied developers, the METR researchers found that AI tools tended to reduce the average time those developers spent actively coding, testing/debugging, or "reading/searching for information." But those time savings were overwhelmed in the end by "time reviewing AI outputs, prompting AI systems, and waiting for AI generations," as well as "idle/overhead time" where the screen recordings show no activity.

Overall, the developers in the study accepted less than 44 percent of the code generated by AI without modification. A majority of the developers reported needing to make changes to the code generated by their AI companion, and 9 percent of the total task time in the "AI-assisted" portion of the study was taken up by this kind of review.

Time saved on things like active coding was overwhelmed by the time needed to prompt, wait on, and review AI outputs in the study. Credit: METR

On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency.

Many of the existing coding benchmarks also focus on synthetic, algorithmically scorable tasks created specifically for the benchmark test, making it hard to compare those results to those focused on work with pre-existing, real-world code bases. Along those lines, the developers in METR's study reported in surveys that the overall complexity of the repos they work with (which average 10 years of age and over 1 million lines of code) limited how helpful the AI could be. The AI wasn't able to utilize "important tacit knowledge or context" about the codebase, the researchers note, while the "high developer familiarity with [the] repositories" aided their very human coding efficiency in these tasks.

These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.

And even for complex coding projects like the ones studied, the researchers are also optimistic that further refinement of AI tools could lead to future efficiency gains for programmers. Systems that have better reliability, lower latency, or more relevant outputs (via techniques such as prompt scaffolding or fine-tuning) "could speed up developers in our setting," the researchers write. Already, they say there is "preliminary evidence" that the recent release of Claude 3.7 "can often correctly implement the core functionality of issues on several repositories that are included in our study."

For now, however, METR's study provides some strong evidence that AI's much-vaunted usefulness for coding tasks may have significant limitations in certain complex, real-world coding scenarios.

Bondi drops case on doc accused of giving kids saline shots instead of vaccines

A Utah-based plastic surgeon appears to be off the hook for federal charges over an alleged COVID-19 vaccine fraud scheme, in which he and three of his associates were accused of providing fraudulent COVID-19 vaccination cards at $50 a pop while squirting the corresponding vaccines down the drain—wasting roughly $28,000 worth of federally provided, lifesaving vaccines. In cases where parents brought in children for fake immunizations, the group would allegedly inject saline solutions at the parents' request to make the children believe they had received vaccinations.

In total, the group was accused of wasting 1,937 COVID-19 vaccine doses between October 2021 and September 2022, including 391 pediatric doses, and creating fraudulent immunization records for them. The alleged scheme netted them nearly $97,000.

The charges were filed in January 2023 under the Biden administration after two separate undercover agents went through the scheme to get a fake vaccination card. The plastic surgeon, Michael Kirk Moore Jr., who owns and operates Plastic Surgery Institute of Utah in Midvale, south of Salt Lake City, as well as the business' office manager, Kari Dee Burgoyne, its receptionist, Sandra Flores, and Moore's neighbor, Kristin Jackson Andersen, were charged in the case. All four people faced charges of conspiracy to defraud the federal government, along with two counts related to improper disposal of government property.

In a statement at the time of the charges, Curt Muller, special agent in charge with the Department of Health and Human Services for the Office of the Inspector General, said that by allegedly giving sham shots to children, "not only did [Moore] endanger the health and well-being of a vulnerable population, but also undermined public trust and the integrity of federal health care programs."

The trial proceedings against the four had begun recently. But on Saturday, Attorney General Pam Bondi posted on social media that "At my direction @TheJusticeDept has dismissed charges against Dr. Kirk Moore. Dr. Moore gave his patients a choice when the federal government refused to do so. He did not deserve the years in prison he was facing. It ends today."

Also on Saturday, Acting United States Attorney Felice John Viti filed a motion to dismiss the case, stating that "such dismissal is in the interests of justice."

Media outlets have noted that Robert F. Kennedy Jr., US health secretary and ardent anti-vaccine advocate, has championed Moore. In April, Kennedy wrote on social media that "Dr Moore deserves a medal for his courage and his commitment to healing!"

'Firefox is Fine. The People Running It are Not'

"Firefox is dead to me," wrote Steven J. Vaughan-Nichols last month for The Register, complaining about everything from layoffs at Mozilla to Firefox's discontinuation of Pocket and Fakespot, its small market share, and some user complaints that the browser might be becoming slower. But a new rebuttal (also published by The Register) argues instead that Mozilla just has "a management layer that doesn't appear to understand what works for its product nor which parts of it matter most to users..." "Steven's core point is correct. Firefox is in a bit of a mess — but, seriously, not such a bad mess. You're still better off with it — or one of its forks, because this is FOSS — than pretty much any of the alternatives." Like many things, unfortunately, much of computing is run on feelings, tradition, and group loyalties, when it should use facts, evidence, and hard numbers. Don't bother saying Firefox is getting slower. It's not. It's faster than it has been in years. Phoronix, the go-to site for benchmarks on FOSS stuff, just benchmarked 21 versions, and from late 2023 to now, Firefox has steadily got faster and faster... Ever since Firefox 1.0 in 2004, Firefox has never had to compete. It's been attached like a mosquito to an artery to the Google cash firehose... Mozilla's leadership is directionless and flailing because it's never had to do, or be, anything else. It's never needed to know how to make a profit, because it never had to make a profit. It's no wonder it has no real direction or vision or clue: it never needed them. It's role-playing being a business. Like we said, don't blame the app. You're still better off with Firefox or a fork such as Waterfox. Chrome even snoops on you when in incognito mode... One observer has been spectating and commentating on Mozilla since before it was a foundation — one of its original co-developers, Jamie Zawinksi... Zawinski has repeatedly said: "Now hear me out, but What If...? browser development was in the hands of some kind of nonprofit organization?" "In my humble but correct opinion, Mozilla should be doing two things and two things only: — Building THE reference implementation web browser, and — Being a jugular-snapping attack dog on standards committees. — There is no 3." Perhaps this is the only viable resolution. Mozilla, for all its many failings, has invented a lot of amazing tech, from Rust to Servo to the leading budget phone OS. It shouldn't be trying to capitalize on this stuff. Maybe encourage it to have semi-independent spinoffs, such as Thunderbird, and as KaiOS ought to be, and as Rust could have been. But Zawinski has the only clear vision and solution we've seen yet. Perhaps he's right, and Mozilla should be a nonprofit, working to fund the one independent, non-vendor-driven, standards-compliant browser engine.

AI Slows Down Some Experienced Software Developers, Study Finds

An anonymous reader quotes a report from Reuters: Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found. AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.

The study's lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected "a 2x speed up, somewhat obviously." [...] The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested. "When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed," Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren't familiar with. Still, the majority of the study's participants, as well as the study's authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page. "Developers have goals other than completing the task as soon as possible," Becker said. "So they're going with this less effortful route."
